87 research outputs found

    Artificial Intelligence: Too Fragile to Fight?

    Get PDF
    The article of record may be found at https://www.usni.org/magazines/proceedings/2022/february/artificial-intelligence-too-fragile-fight
    Information Warfare Essay Contest - First Prize
    Artificial intelligence (AI) has become the technical focal point for advancing naval and Department of Defense (DoD) capabilities. Secretary of the Navy Carlos Del Toro listed AI first among his priorities for innovating U.S. naval forces. Chief of Naval Operations Admiral Michael Gilday listed it as his top priority during his Senate confirmation hearing. This focus is appropriate: AI offers many promising breakthroughs in battlefield capability and agility in decision making. Yet, the proposed advances come with substantial risk: automation, including AI, has persistent, critical vulnerabilities that must be thoroughly understood and adequately addressed if defense applications are to remain resilient and effective.
    Booz Allen Hamilton

    Understanding, Assessing, and Mitigating Safety Risks in Artificial Intelligence Systems

    Get PDF
    Prepared for: Naval Air Warfare Development Center (NAVAIR)
    Traditional software safety techniques rely on validating software against a deductively defined specification of how the software should behave in particular situations. In the case of AI systems, specifications are often implicit or inductively defined. Data-driven methods are subject to sampling error, since practical datasets cannot provide exhaustive coverage of all possible events in a real physical environment. Traditional software verification and validation approaches may not apply directly to these novel systems, complicating the practice of systems safety analysis (such as implemented in MIL-STD-882). However, AI offers advanced capabilities, and it is desirable to ensure the safety of systems that rely on these capabilities. When AI technology is deployed in a weapon system, robot, or planning system, unwanted events are possible. Several techniques can support the evaluation process for understanding the nature and likelihood of unwanted events in AI systems and making risk decisions about naval employment. This research considers the state of the art, evaluating which techniques are most likely to be employable, usable, and correct. Techniques include software analysis, simulation environments, and mathematical determinations.
    Naval Air Warfare Development Center
    Naval Postgraduate School, Naval Research Program (PE 0605853N/2098)
    Approved for public release. Distribution is unlimited.
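
    As a rough illustration of the simulation-and-mathematics techniques the abstract mentions, the sketch below estimates the likelihood of an unwanted event from repeated simulation runs and attaches a distribution-free upper confidence bound. The simulator, its failure rate, and the choice of a Hoeffding bound are assumptions made for illustration, not the report's actual method.

```python
import math
import random

def simulate_episode(seed: int) -> bool:
    """Stand-in for one run of a simulation environment.

    Returns True if the run ends in an unwanted event. Here the "system"
    is a toy random process; a real study would drive the actual AI
    component in its simulated operating environment.
    """
    rng = random.Random(seed)
    return rng.random() < 0.003  # hypothetical true failure rate

def estimate_failure_rate(n_runs: int, confidence: float = 0.95):
    """Estimate P(unwanted event) and a one-sided Hoeffding upper bound.

    Sampling error matters: with finitely many runs the point estimate can
    be zero even when the true rate is not, so the bound is what supports a
    risk decision (e.g., mapping to a likelihood category in MIL-STD-882).
    """
    failures = sum(simulate_episode(seed) for seed in range(n_runs))
    p_hat = failures / n_runs
    # Hoeffding's inequality: P(true rate > p_hat + eps) <= exp(-2 * n * eps^2)
    eps = math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2.0 * n_runs))
    return p_hat, min(1.0, p_hat + eps)

if __name__ == "__main__":
    p_hat, upper = estimate_failure_rate(n_runs=20_000)
    print(f"point estimate: {p_hat:.5f}, 95% upper bound: {upper:.5f}")
```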

    Accountability in Computer Systems

    Get PDF
    The article of record as published may be found at http://dx.doi.org/10.1093/oxfordhb/9780190067397.013.10
    This chapter addresses the relationship between AI systems and the concept of accountability. To understand accountability in the context of AI systems, one must begin by examining the various ways the term is used and the variety of concepts to which it is meant to refer. Accountability is often associated with transparency, the principle that systems and processes should be accessible to those affected through an understanding of their structure or function. For a computer system, this often means disclosure about the system’s existence, nature, and scope; scrutiny of its underlying data and reasoning approaches; and connection of the operative rules implemented by the system to the governing norms of its context. Transparency is a useful tool in the governance of computer systems, but only insofar as it serves accountability. There are other mechanisms available for building computer systems that support accountability of their creators and operators. Ultimately, accountability requires establishing answerability relationships that serve the interests of those affected by AI systems.

    This Thing Called Fairness: Disciplinary Confusion Realizing a Value in Technology

    Get PDF
    The explosion in the use of software in important sociotechnical systems has renewed focus on the study of the way technical constructs reflect policies, norms, and human values. This effort requires the engagement of scholars and practitioners from many disciplines. And yet, these disciplines often conceptualize the operative values very differently while referring to them using the same vocabulary. The resulting conflation of ideas confuses discussions about values in technology at disciplinary boundaries. In the service of improving this situation, this paper examines the value of shared vocabularies, analytics, and other tools that facilitate conversations about values in light of these discipline-specific conceptualizations and the role such tools play in furthering research and practice; outlines different conceptions of "fairness" deployed in discussions about computer systems; and provides an analytic tool for interdisciplinary discussions and collaborations around the concept of fairness. We use a case study of risk assessments in criminal justice applications both to motivate our effort, describing how conflation of different concepts under the banner of "fairness" led to unproductive confusion, and to illustrate the value of the fairness analytic by demonstrating how the rigorous analysis it enables can assist in identifying key areas of theoretical, political, and practical misunderstanding or disagreement, and, where desired, support alignment or collaboration in the absence of consensus.
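
    The conflation the abstract describes can be made concrete with a small calculation: two common formalizations of "fairness" can disagree about the same classifier. The counts below are invented for illustration and are not the paper's case-study data.

```python
# A toy illustration (not the paper's data): the same risk classifier can
# satisfy one formalization of "fairness" while violating another, which is
# the kind of cross-disciplinary conflation the fairness analytic targets.

# Hypothetical outcome counts per group for a binary "high risk" prediction:
# tp/fp/fn/tn = true positives, false positives, false negatives, true negatives.
groups = {
    "group_a": dict(tp=40, fp=10, fn=10, tn=40),
    "group_b": dict(tp=20, fp=30, fn=5,  tn=45),
}

def positive_rate(c):
    """Share of the group predicted 'high risk' (demographic-parity view)."""
    return (c["tp"] + c["fp"]) / sum(c.values())

def false_positive_rate(c):
    """Share of truly low-risk people flagged 'high risk' (error-rate view)."""
    return c["fp"] / (c["fp"] + c["tn"])

for name, counts in groups.items():
    print(f"{name}: predicted-positive rate = {positive_rate(counts):.2f}, "
          f"false-positive rate = {false_positive_rate(counts):.2f}")
# Both groups are flagged at the same overall rate (0.50), yet group_b's
# truly low-risk members are flagged twice as often (0.40 vs 0.20):
# "fair" under demographic parity, "unfair" under error-rate parity.
```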

    System Safety Engineering for Social and Ethical ML Risks: A Case Study

    Full text link
    Governments, industry, and academia have undertaken efforts to identify and mitigate harms in ML-driven systems, with a particular focus on social and ethical risks of ML components in complex sociotechnical systems. However, existing approaches are largely disjointed, ad hoc, and of unknown effectiveness. Systems safety engineering is a well-established discipline with a track record of identifying and managing risks in many complex sociotechnical domains. We adopt the natural hypothesis that tools from this domain could serve to enhance risk analyses of ML in its context of use. To test this hypothesis, we apply a "best of breed" systems safety analysis, Systems Theoretic Process Analysis (STPA), to a specific high-consequence system with an important ML-driven component, namely the Prescription Drug Monitoring Programs (PDMPs) operated by many US States, several of which rely on an ML-derived risk score. We focus in particular on how this analysis can extend to identifying social and ethical risks and developing concrete design-level controls to mitigate them.
    Comment: 14 pages, 5 figures, 3 tables. Accepted to the 36th Conference on Neural Information Processing Systems, Workshop on ML Safety (NeurIPS 2022).
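
    To give a flavor of what an STPA-style analysis looks like in practice, the sketch below enumerates candidate unsafe control actions for a controller acting on an ML-derived risk score. The controller, actions, contexts, and hazards are hypothetical stand-ins, not the paper's actual PDMP analysis.

```python
from dataclasses import dataclass
from itertools import product

# STPA asks how control actions, issued in particular contexts, can lead to
# system-level hazards. This fragment models a prescriber acting on an
# ML-derived risk score; all names and hazards are illustrative assumptions.

@dataclass(frozen=True)
class UnsafeControlAction:
    controller: str
    action: str
    mode: str      # one of STPA's four ways a control action can be unsafe
    context: str
    hazard: str

CONTROLLER = "prescriber"
ACTIONS = ["deny prescription", "approve prescription"]
MODES = [
    "provided when unsafe",
    "not provided when needed",
    "provided too early or too late",
    "applied too long or stopped too soon",
]
CONTEXTS = [
    "risk score inflated by a data error",
    "risk score understates actual risk",
]

def candidate_ucas():
    """Enumerate candidate unsafe control actions for human screening.

    The enumeration is a prompt, not a verdict: an analyst keeps only the
    combinations that plausibly lead to a defined system-level hazard and
    then derives design-level controls for them.
    """
    for action, mode, context in product(ACTIONS, MODES, CONTEXTS):
        yield UnsafeControlAction(
            controller=CONTROLLER,
            action=action,
            mode=mode,
            context=context,
            hazard=("patient denied medically necessary care"
                    if action == "deny prescription"
                    else "unsafe prescription dispensed"),
        )

if __name__ == "__main__":
    for uca in list(candidate_ucas())[:4]:  # show a few of the 16 candidates
        print(uca)
```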

    Mixcoin: Anonymity for Bitcoin with accountable mixes (Full version)

    Get PDF
    We propose Mixcoin, a protocol to facilitate anonymous payments in Bitcoin and similar cryptocurrencies. We build on the emergent phenomenon of currency mixes, adding an accountability mechanism to expose theft. We demonstrate that incentives of mixes and clients can be aligned to ensure that rational mixes will not steal. Our scheme is efficient and fully compatible with Bitcoin. Against a passive attacker, our scheme provides an anonymity set of all other users mixing coins contemporaneously. This is an interesting new property with no clear analog in better-studied communication mixes. Against active attackers our scheme offers similar anonymity to traditional communication mixes.
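
    The accountability mechanism turns on the mix committing to its terms in a signed warranty before receiving any coins, so a cheated client holds publishable evidence of theft. The sketch below paraphrases that idea with illustrative field names and a keyed hash standing in for the mix's public-key signature; it is not the paper's exact warranty format.

```python
import hashlib
import hmac
import json

# Rough sketch of Mixcoin-style accountability (illustrative field names).
# Before sending coins, the client obtains a warranty: the mix's signed
# promise to pay an equal-value chunk to a fresh address by a deadline.
# If the mix steals, the client publishes the warranty as proof of theft.

MIX_SIGNING_KEY = b"mix-secret-key"  # stand-in only; a real mix uses a
                                     # public-key signature so third parties
                                     # can verify the warranty themselves.

def sign_warranty(terms: dict) -> dict:
    """Mix side: commit to the mixing terms before receiving any coins."""
    payload = json.dumps(terms, sort_keys=True).encode()
    sig = hmac.new(MIX_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"terms": terms, "signature": sig}

def warranty_is_valid(warranty: dict) -> bool:
    """Client side: check the signature before transferring the chunk.

    With a public-key signature, this check plus the blockchain showing no
    payout by the deadline is convincing evidence of theft to anyone.
    """
    payload = json.dumps(warranty["terms"], sort_keys=True).encode()
    expected = hmac.new(MIX_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, warranty["signature"])

if __name__ == "__main__":
    terms = {
        "chunk_value_btc": 0.1,                 # fixed chunk size aids anonymity
        "escrow_address": "mix-escrow-addr",    # hypothetical placeholder
        "payout_address": "client-fresh-addr",  # hypothetical placeholder
        "payout_deadline_block": 900000,
        "mixing_fee_rate": 0.01,
    }
    warranty = sign_warranty(terms)
    print("warranty valid:", warranty_is_valid(warranty))
```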